Hmmm… you’re right. Looking inside the box does have some effect: it entangles the observer with the system, decohering the wave function and thus ruling out funky quantum effects that don’t also include the observer. However, it’s not obvious that this reduces the measure of the output in any sense. But it could.
So basically, I abandon that line of argument. The more important point in that section, it seems to me, is that any simplifying tricks which reduce the computational complexity of the simulation also reduce its quantum branching factor. So there is still a significant range of computational power a UFAI could have at which it would be effectively omnipotent in any human sense, even capable of running simulations with trivial effort, yet still unable to obtain a significant measure of quantum torture relative to normal reality. (As a separate point: though I’m certain that there are deterministic algorithms for [simulated] human intelligence, it’s quite possible that the only quantum algorithms amount to “build the human.”)
(Partly, of course, this is because my own estimate of the computational power of AI is probably much lower than the prevailing view on Less Wrong. I have several arguments for this, but aside from the human’s argument near the end, they’re irrelevant here; and besides, I suspect they’re rationalizations.)